Two-Stage Second Order Training in Feedforward Neural Networks

Authors

  • Melvin Deloyd Robinson
  • Michael T. Manry
Abstract

In this paper, we develop and demonstrate a new second-order two-stage algorithm called OWO-Newton. First, two-stage algorithms are motivated and the Gauss-Newton input-weight Hessian matrix is developed. Block coordinate descent is then used to apply Newton's algorithm alternately to the input and output weights. The algorithm's performance is comparable to that of Levenberg-Marquardt, with the advantage of reduced computational complexity, and it is shown to possess a form of affine invariance.
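The abstract describes the two-stage structure only at a high level. The sketch below shows how one such iteration might look for a single-hidden-layer network with sigmoid hidden units and linear outputs: the output weights are solved exactly by linear least squares (the OWO stage), and the input weights then receive one Newton step built from a Gauss-Newton Hessian. All names, and the diagonal load `lam`, are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def owo_newton_step(X, Y, W_in, lam=1e-3):
    """One two-stage iteration. X is N x n_in, Y is N x M,
    and W_in is Nh x (n_in + 1), including a bias column."""
    N, M = X.shape[0], Y.shape[1]
    Nh, D = W_in.shape
    Xa = np.hstack([X, np.ones((N, 1))])        # inputs augmented with a bias
    O = sigmoid(Xa @ W_in.T)                    # hidden-unit activations
    Oa = np.hstack([O, np.ones((N, 1))])

    # Stage 1 (OWO): the network output is linear in the output weights,
    # so they are obtained exactly by linear least squares.
    W_out = np.linalg.lstsq(Oa, Y, rcond=None)[0].T   # M x (Nh + 1)

    # Stage 2: Newton step on the input weights using the Gauss-Newton
    # Hessian; lam adds a small diagonal load to keep the solve stable.
    E = Oa @ W_out.T - Y                        # N x M residuals
    dO = O * (1.0 - O)                          # sigmoid derivative
    H = lam * np.eye(Nh * D)
    g = np.zeros(Nh * D)
    for p in range(N):
        # Jacobian of the M outputs w.r.t. all Nh*D input weights:
        # d y_m / d w_kj = W_out[m, k] * f'(net_k) * Xa[p, j]
        Jp = np.einsum('mk,j->mkj', W_out[:, :-1] * dO[p], Xa[p]).reshape(M, -1)
        H += Jp.T @ Jp
        g += Jp.T @ E[p]
    W_in = W_in - np.linalg.solve(H, g).reshape(Nh, D)
    return W_in, W_out
```

Alternating the two stages is the block coordinate descent mentioned above: each stage applies a Newton-type update to one weight block while the other is held fixed, and for the output weights the least-squares solve is itself the exact Newton step, since the error is quadratic in them.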


Similar resources

Various multistage ensembles for prediction of heating energy consumption

Feedforward neural network models are created for prediction of the daily heating energy consumption of the NTNU university campus Gløshaugen, using actual measured data for training and testing. Improved prediction accuracy is achieved by using a neural network ensemble. Previously trained feedforward neural networks are first separated into clusters using the k-means algorithm, and then the best n...
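The snippet is truncated, but the described pipeline (cluster previously trained networks with k-means, then combine the best members) can be sketched as below. Representing each network by its validation-set predictions, and averaging the best model per cluster, are assumptions made for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_ensemble(val_preds, val_targets, test_preds, n_clusters=3):
    """val_preds, test_preds: (n_models, n_samples) prediction matrices."""
    # Cluster the models by the similarity of their validation predictions.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(val_preds)
    errors = np.mean((val_preds - val_targets) ** 2, axis=1)  # per-model MSE
    # Keep the lowest-error model from each cluster, then average them.
    picks = [np.where(labels == c)[0][np.argmin(errors[labels == c])]
             for c in range(n_clusters)]
    return test_preds[picks].mean(axis=0)
```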


Improving robust pattern recognition in attractor recurrent neural networks by employing chaos-like dynamics

In this paper, two kinds of chaotic neural networks are proposed to evaluate the efficiency of chaotic dynamics in robust pattern recognition. The first model is designed based on natural selection theory. In this model, the attractor recurrent neural network intelligently guides the evaluation of chaotic nodes in order to obtain the best solution. In the second model, a different structure of ch...


Efficient algorithm for training neural networks with one hidden layer

An efficient second-order algorithm for training feedforward neural networks is presented. The algorithm has a convergence rate similar to that of the Levenberg-Marquardt (LM) method, while being less computationally intensive and requiring less memory. This is especially important for large neural networks, where the LM algorithm becomes impractical. The algorithm was verified with several examples.
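As a rough illustration of the memory argument (with assumed dimensions, and without reproducing that paper's algorithm), compare the full LM Hessian over all weights with what a layer-wise second-order scheme has to store when the output weights are handled by a least-squares solve:

```python
# Assumed network dimensions, chosen only to make the comparison concrete.
n_in, n_hid, n_out = 16, 50, 100
w_in = (n_in + 1) * n_hid           # 850 input weights (incl. bias)
w_out = (n_hid + 1) * n_out         # 5100 output weights (incl. bias)

full_lm = (w_in + w_out) ** 2       # full LM Hessian over all weights
# Layer-wise alternative: input-weight Hessian plus the (Nh+1)x(Nh+1)
# autocorrelation matrix needed for the output-weight least squares.
two_stage = w_in ** 2 + (n_hid + 1) ** 2
print(f"full LM:   {full_lm:>12,} Hessian entries")
print(f"two-stage: {two_stage:>12,} entries ({full_lm / two_stage:.0f}x fewer)")
```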


Cursive Word Recognition Based on Interactive Activation and Early Visual Processing Models

We present an off-line cursive word recognition system based completely on neural networks: reading models and models of early visual processing. The first stage (normalization) preprocesses the input image in order to reduce letter position uncertainty; the second stage (feature extraction) is based on the feedforward model of orientation selectivity; the third stage (letter pre-recognition) i...
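The snippet names three pipeline stages before it is cut off; a skeletal sketch, with assumed and deliberately simplified stand-ins (a contrast normalizer, a gradient-orientation histogram in place of the paper's orientation-selectivity model, and a linear letter scorer), might look like:

```python
import numpy as np

def normalize(image):
    """Stage 1: reduce letter-position uncertainty; here just a
    contrast normalization placeholder."""
    return (image - image.mean()) / (image.std() + 1e-8)

def oriented_features(image, n_orientations=8):
    """Stage 2: orientation-selective feature extraction; a crude
    gradient-direction histogram standing in for the paper's model."""
    gy, gx = np.gradient(image)
    angles = np.arctan2(gy, gx)
    hist, _ = np.histogram(angles, bins=n_orientations,
                           range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    return hist

def letter_scores(features, W):
    """Stage 3: letter pre-recognition; a placeholder linear scorer
    where W maps features to per-letter scores."""
    return features @ W
```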


A Parallel Implementation of Backpropagation Neural Network on MasPar MP-1

In this paper, we explore the parallel implementation of the backpropagation algorithm, with and without hidden layers, on the MasPar MP-1. The implementation is based on a SIMD architecture and uses a backpropagation model. Our implementation uses weight batching rather than the on-line weight updating used by most serial and parallel implementations of backpropagation. This method results...
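The batching choice the snippet describes can be made concrete with a toy linear layer: the batched form accumulates one gradient over all patterns, so each pattern's contribution can be computed on a separate processing element, while the on-line form updates after every pattern and is inherently sequential. This is an illustrative sketch, not the MasPar implementation.

```python
import numpy as np

def batch_update(W, X, Y, lr=0.1):
    """One update from the gradient accumulated over the whole batch
    (the per-pattern terms are independent, hence SIMD-friendly)."""
    grad = (X @ W - Y).T @ X / len(X)   # mean gradient over all patterns
    return W - lr * grad.T

def online_update(W, X, Y, lr=0.1):
    """One update per pattern, in order; each step depends on the last."""
    for x, y in zip(X, Y):
        W = W - lr * np.outer(x, x @ W - y)
    return W
```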



Journal:

Volume   Issue

Pages  -

Publication date: 2013